The Quality Test Execution iCube enables you to create ad hoc reports that help ensure test instances run efficiently and on time. You can use this iCube to identify blocked, failed, or skipped test executions and analyze the impact they have on the quality and timelines of the project.
Using this iCube, Project Managers and Development Managers can answer business questions such as: Which test executions are blocked, failed, or skipped? How do those outcomes affect the quality and timelines of the project?
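For example, to answer a question such as "How many test executions failed on each device platform last quarter?", a report could pair the No. of Failed Test Executions metric (defined in the metrics table later in this topic) with the Test Device OS and Calendar Quarter attributes. The sketch below uses the same expression syntax as the Formula column of the metrics table; the report layout is illustrative, and the exact report-builder steps depend on your analytics environment:

```
Rows:   [Test Device OS], [Calendar Quarter]
Metric: Max([No. of Test Executions]) <[Test Execution = FAILED]>
```

The following table describes the attributes available in this iCube.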
Attribute Name | Description |
---|---|
Automated Test Flag | Flag that indicates if a test execution is automated. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Calendar Date | Gregorian calendar date displayed in the format 'M/D/YYYY' |
Calendar Month | Gregorian calendar month displayed in the format 'Mon YYYY' |
Calendar Quarter | Gregorian calendar quarter displayed in the format 'Q# YYYY' |
Calendar Week | Gregorian calendar week, displayed as the week number. For example: W21, W22
Calendar Year | Gregorian calendar year displayed in the format YYYY |
Created By | Name of the person who created the test execution. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Date | Date on which the status of the test case moved to 'Created', 'In Progress', 'Resolved', or 'Completed' |
Day of Month | Day of month between 1 and 31 |
Day of Week | Day of week |
Day of Year | Day of the year, between 1 and 366
End DateTime | Date and time at which the test case execution ended. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Errors | Details of errors encountered during the test case run. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Executed By | Name of the person who executed the test execution |
Iteration | Iteration in which the test case was created. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Name | Name of the test case |
Project | Project to which the Work Item is associated |
Region Host | Server corresponding to a region |
Region Location | Location of the data center |
Region Name | Name of the region where the test devices are located, such as India, Europe, and so on. |
Start DateTime | Date and time at which the test case was started in the system |
Status | Current status of the test execution |
Test Application | Unique Identifier of the Test Application |
Test Application Activity | Name of the Test Application Activity |
Test Application Allow Resign Flag | Flag that indicates whether the Test Application allows resign |
Test Application Built Version | Unique number assigned to each software build of the Test Application
Test Application Camera Support Flag | Flag that indicates whether the Test Application allows camera support |
Test Application Custom Key Store Flag | Flag that indicates whether the Test Application allows Key Store support |
Test Application Distribution Type | Distribution type of the Test Application |
Test Application Fingerprint Support | Flag that indicates whether the Test Application allows Fingerprint support |
Test Application Fix key Chain Access Flag | Flag that indicates whether the Test Application allows Fix Key Chain access support |
Test Application Name | Name of the Test Application |
Test Application Network Capture Support Flag | Flag that indicates whether the Test Application allows Network Capture support |
Test Application Notes | Notes linked to the Test Application |
Test Application Override Entitlements Flag | Flag that indicates whether the Test Application allows Entitlements support |
Test Application Package | Name of the package linked to Test Application |
Test Application Platform Name | Name of the Platform or Operating System linked to Test Application |
Test Application Profile | The profile that should be used along with the Test Application |
Test Application Release Version | Unique number assigned to each software release of the Test Application
Test Application Simulator Support Flag | Flag that indicates whether the Test Application allows Simulator support |
Test Application Size | Size of the Test Application |
Test Application Unique Name | Unique name of the Test Application |
Test Case | Test cases which are part of the current test execution. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Test Case Execution Status | Current status of the test case. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Test Device Cambrionix Port | Number of ports available in the Cambrionix hub used by the test device. |
Test Device Category (Source) | Category of the test device, such as Phone or Tablet |
Test Device Default Language | Default language associated with the test device. For example: en |
Test Device Default Region | Default region associated with the test device, such as US, SG, and so on. |
Test Device Default WiFi SSID | Default Service Set Identifier (SSID) of the Wireless network associated with the test device |
Test Device Host Machine | The server on which the test device is hosted |
Test Device ID | Unique identifier of the test device |
Test Device Last Used Date | Date and time when the test device was last used |
Test Device Manufacturer | Manufacturer name of the test device, such as Apple, Samsung, Xiaomi, and so on. |
Test Device Model | Model number of the test device, such as iPhone 8, iPad Air, Redmi 6, and so on. |
Test Device Name | Name of the test device used |
Test Device Notes | Short description provided for the test device |
Test Device NV Server | Network Virtualization server associated with the test device |
Test Device OS | Operating System of the test device, such as iOS or Android. |
Test Device Screensize | Size of the test device's screen in pixels. For example: 640x1136, 1080x1920. |
Test Device Status (Source) | Status of the test device as mentioned in the source, such as error, offline, online, unauthorized. |
Test Device Uptime | Time, in seconds, for which the test device has been working or available.
Test Device Whitelist Cleanup | Flag that indicates if the global whitelist cleanup feature is enabled for a test device; 0 = Disabled, 1 = Enabled. When this property is enabled, the device is cleaned when it is released, and all applications that are not part of the whitelist are uninstalled.
Test Execution Cause | Root cause for the test execution failure |
Test Execution Framework (Source) | Type of test automation framework used as mentioned in the source, such as Appium, Selenium, Seetest, and so on. |
Test Execution Has Report Flag | Flag to indicate if a test execution contains a report that summarizes the test |
Test Execution Lagging Count of Days | Count of days from the first data record to the current day
Test Execution Number | Unique identification number associated with the test execution
Test Execution Project Accessibility Testing Mode Flag | Flag that indicates whether Accessibility Testing Mode is enabled. By default, all accessibility features for differently abled users of the project are enabled.
Test Execution Project Appium Project Flag | Flag to indicate if an Appium server is used for a project |
Test Execution Project Notes | Short description provided for a project |
Test Execution Project Unique ID | Unique identifier for a project formed using the project name and environment |
Test Execution Report Url | URL or path where the test execution summary report is available |
Test Execution Success Flag | Flag to indicate if the test execution is successful or not |
Test Region Status (Source) | Status of the region as mentioned in the source, such as online, offline, error_not_paired. |
Test Suite | Test suite which is executed as part of the test execution. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Test Suite Type | Type of the test suite which is executed. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Test Type | Type of test case, such as mobile or web.
Total Requests | Total number of requests submitted to the application during performance testing. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Total Test Cases | Total number of test cases created. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
Total Users | Total number of users in the application during performance testing. Note: This attribute is not supported for the Digital.ai Continuous Testing source system.
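Attributes can also be used to qualify a report. A minimal sketch, assuming the same `<[Attribute = Value]>` qualification style that appears in the metric formulas below; the values Android and 1 are illustrative:

```
<[Test Device OS = Android]> <[Automated Test Flag = 1]>
```

Applied to a report, this restricts the results to automated test executions that ran on Android devices. The following table describes the metrics available in this iCube.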
Metric Name | Description | Formula | Expected Value |
---|---|---|---|
Actual Test Case Count | Count of all test cases in the test suite. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Max([Actual Test Case Count]) {[Test Execution Name]+, [Test Execution Number]+, [Test Execution Status (Source)]+, [Test Execution Status (Standardized)]+, [Test Execution Test Cases]+, [Test Suite]+, [Calendar Month]+, [Calendar Quarter]+} | NA
Count of Device Groups | Total number of device groups associated with a device | Sum([Count of Device Groups]) | >=0 |
Count of Device Tags | Total number of tags associated with a device | Sum([Count of Device Tags]) | >=0 |
Duration (ms) | Total time taken for the test execution | Sum([Test Execution Duration]) | Lower value is ideal
No. of Blocked Test Case Executions | Count of all test case executions that are blocked. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Count([No. of Test Case Executions]) <[Test Case Execution = BLOCKED]> | Lower value is ideal
No. of Blocked Test Executions | Count of all test executions that are blocked. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Max([No. of Test Executions]) <[Test Execution = BLOCKED]> | Lower value is ideal
No. of Failed Test Case Executions | Count of all test case executions that have failed. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Count([No. of Test Case Executions]) <[Test Case Execution = FAILED]> | Lower value is ideal
No. of Failed Test Executions | Count of all test executions that have failed in the run | Max([No. of Test Executions]) <[Test Execution = FAILED]> | Lower value is ideal
No. of Incomplete Test Executions | Count of all test executions whose status is INCOMPLETE | Count([No. of Test Executions]) {~} <[Test Execution = INCOMPLETE]> | Lower value is ideal
No. of Passed Test Case Executions | Count of all test case executions that have passed the test run. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Count([No. of Test Case Executions]) <[Test Case Execution = PASSED]> | Higher value is ideal
No. of Passed Test Executions | Count of all test executions that have passed the test run | Max([No. of Test Executions]) <[Test Execution = PASSED]> | Higher value is ideal
No. of Skipped Test Case Executions | Count of all test case executions that skipped the test run. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Count([No. of Test Case Executions]) <[Test Case Execution = SKIPPED]> | Lower value is ideal
No. of Skipped Test Executions | Count of all test executions that skipped the test run | Max([No. of Test Executions]) <[Test Execution = SKIPPED]> | Lower value is ideal
No. of Test Executions | Count of all test executions | Max([No. of Test Executions]) | >=0
Test Cases Executed | Total count of all test cases executed. Note: This metric is not supported for the Digital.ai Continuous Testing source system. | Count([No. of Test Case Executions]) | >=0
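The counts above can also be combined into ratio metrics for trend reporting. A minimal sketch of a derived pass-rate metric, assuming your analytics environment supports derived metrics built from the expressions in the Formula column; the name Pass Rate % is illustrative and not part of the iCube:

```
Pass Rate % = (Max([No. of Test Executions]) <[Test Execution = PASSED]>
               / Max([No. of Test Executions])) * 100
```

A higher value is ideal; a falling pass rate alongside rising blocked or skipped counts is the kind of trend this iCube is designed to surface.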